Financial Time Series Models (ARCH/GARCH)

Plotting Data

Code
library('quantmod')
getSymbols("UPS", from="2010-01-01", src="yahoo")
[1] "UPS"
Code
head(UPS)
           UPS.Open UPS.High UPS.Low UPS.Close UPS.Volume UPS.Adjusted
2010-01-04    58.18    58.82   57.98     58.18    3897200     37.63986
2010-01-05    58.25    59.00   58.12     58.28    5966300     37.70455
2010-01-06    58.21    58.27   57.81     57.85    5770200     37.42636
2010-01-07    57.96    57.96   57.19     57.41    5747000     37.14170
2010-01-08    59.77    61.13   59.52     60.17   13779300     38.92730
2010-01-11    60.55    63.38   60.50     62.82   13744900     40.64173
Code
#any(is.na(UPS))

ups.close<- Ad(UPS)

# candlestick plot of the UPS prices
chartSeries(UPS, type = "candlesticks",theme='white')

Code
# returns plot
returns_ups = diff(log(ups.close))
chartSeries(returns_ups, theme="white")

The data exhibits non-stationarity and displays volatility clustering in the returns, particularly evident around the period from 2020 to 2022.
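The visual impression of non-stationary prices and stationary returns can be backed up with a formal unit-root test; a minimal sketch using `tseries::adf.test` (an assumption: the tseries package is installed):

Code
library(tseries)

# ADF test on the log prices: expect to fail to reject the unit root
adf.test(as.numeric(log(ups.close)))

# ADF test on the log returns, dropping the leading NA from diff():
# expect to reject the unit root, i.e. the returns are stationary
adf.test(na.omit(as.numeric(returns_ups)))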

Code
library('quantmod')
getSymbols("JBHT", from="2010-01-01", src="yahoo")
[1] "JBHT"
Code
head(JBHT)
           JBHT.Open JBHT.High JBHT.Low JBHT.Close JBHT.Volume JBHT.Adjusted
2010-01-04     32.56     33.11    32.24      33.06     1876000      28.72956
2010-01-05     33.05     33.15    32.46      33.15     2186900      28.80777
2010-01-06     32.97     33.30    32.85      32.95     1147200      28.63396
2010-01-07     32.88     33.26    32.62      33.08     1272400      28.74694
2010-01-08     33.12     34.14    33.12      34.04     2068200      29.58119
2010-01-11     34.04     34.90    33.98      34.54     1897100      30.01568
Code
#any(is.na(JBHT))

jbht.close<- Ad(JBHT)

# candlestick plot of the JBHT prices
chartSeries(JBHT, type = "candlesticks",theme='white')

Code
# returns plot
returns_jbht = diff(log(jbht.close))
chartSeries(returns_jbht, theme="white")

The data exhibits non-stationarity and displays volatility clustering in the returns, particularly evident around the period from 2020 to 2022.

ACF and PACF Plots

Code
# returns_ups
library(forecast) # provides ggAcf, ggPacf, ggtsdisplay, Arima
ggAcf(returns_ups, na.action = na.pass) 

Code
ggPacf(returns_ups, na.action = na.pass) 

The returns appear weakly stationary, with only a few marginally significant correlations; the candidate orders are p = 4 and q = 4, suggesting an ARIMA model for the mean.

Code
ggAcf(returns_ups^2, na.action = na.pass) 

Code
ggPacf(returns_ups^2, na.action = na.pass) 

The squared returns show significant correlations at lag orders p = 1 to 6 and q = 0 to 5. Given this observation, a GARCH model is more appropriate for the variance.
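The ARCH signal seen in the plots can be quantified with a Ljung-Box test on the squared returns; a small sketch using only base R:

Code
# Ljung-Box test on squared returns: a small p-value indicates
# autocorrelated squared returns, i.e. ARCH effects worth modeling
Box.test(na.omit(as.numeric(returns_ups))^2, lag = 10, type = "Ljung-Box")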

Code
# returns_jbht
ggAcf(returns_jbht, na.action = na.pass) 

Code
ggPacf(returns_jbht, na.action = na.pass) 

The returns appear weakly stationary, with only a few marginally significant correlations; the candidate orders are p = 1, 4 and q = 1, 4, 5, suggesting an ARIMA model for the mean.

Code
ggAcf(returns_jbht^2, na.action = na.pass) 

Code
ggPacf(returns_jbht^2, na.action = na.pass) 

The squared returns show significant correlations at lag orders p = 1 to 6 and q = 0 to 5. Given this observation, a GARCH model is more appropriate for the variance.

Fitting ARIMA

Code
# returns_ups
log.ups <- log(ups.close)
ggtsdisplay(diff(log.ups))

The differenced log series looks like white noise, so the original series resembles a random walk, i.e. ARIMA(0,1,0).

Code
######################## Check for different combinations ########


d=0
i=1
temp= data.frame()
ls=matrix(rep(NA,6*100),nrow=100) # up to 6x6x2 = 72 combinations fit


for (p in 1:6)  # AR order p-1 = 0..5
{
  for(q in 1:6)  # MA order q-1 = 0..5
  {
    for(d in 0:1)
    {
      
      if(p-1+d+q-1<=8)
      {
        
        model<- Arima(log.ups,order=c(p-1,d,q-1),include.drift=TRUE) 
        ls[i,]= c(p-1,d,q-1,model$aic,model$bic,model$aicc)
        i=i+1
        #print(i)
        
      }
      
    }
  }
}

temp= as.data.frame(ls)
names(temp)= c("p","d","q","AIC","BIC","AICc")

temp[which.min(temp$AIC),]
   p d q       AIC       BIC      AICc
43 3 0 3 -20178.72 -20123.05 -20178.67
Code
temp[which.min(temp$BIC),]#0,1,0
  p d q       AIC       BIC      AICc
2 0 1 0 -20155.12 -20142.74 -20155.11
Code
temp[which.min(temp$AICc),]
   p d q       AIC       BIC      AICc
43 3 0 3 -20178.72 -20123.05 -20178.67

AIC and AICc favor ARIMA(3,0,3), while BIC favors the random walk ARIMA(0,1,0).
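As a cross-check on the grid search, `forecast::auto.arima` can search the orders automatically; a sketch (results may differ slightly from the table above, since auto.arima uses a stepwise AICc search and its own drift handling):

Code
library(forecast)
# stepwise AICc search over non-seasonal ARIMA orders for the log prices
auto.arima(log.ups, seasonal = FALSE)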

Code
# model diagnostics
library(astsa) # provides sarima()
sarima(log.ups, 0,1,0)
initial  value -4.226615 
iter   1 value -4.226615
final  value -4.226615 
converged
initial  value -4.226615 
iter   1 value -4.226615
final  value -4.226615 
converged
<><><><><><><><><><><><><><>
 
Coefficients: 
         Estimate    SE t.value p.value
constant    4e-04 2e-04  1.5556  0.1199

sigma^2 estimated as 0.0002132107 on 3589 degrees of freedom 
 
AIC = -5.614238  AICc = -5.614238  BIC = -5.610792 
 

Code
sarima(log.ups, 5,0,3)
initial  value -0.791711 
iter   2 value -4.047115
iter   3 value -4.050940
iter   4 value -4.207225
iter   5 value -4.226742
iter   6 value -4.227080
iter   7 value -4.227405
iter   8 value -4.227452
iter   9 value -4.227478
iter  10 value -4.227520
iter  11 value -4.227703
iter  12 value -4.228076
iter  13 value -4.228890
iter  14 value -4.229145
iter  15 value -4.229363
iter  16 value -4.229561
iter  17 value -4.229591
iter  18 value -4.229612
iter  19 value -4.229615
iter  20 value -4.229620
iter  20 value -4.229620
final  value -4.229620 
converged
initial  value -4.227046 
iter   2 value -4.227048
iter   3 value -4.227070
iter   4 value -4.227087
iter   5 value -4.227141
iter   6 value -4.227224
iter   7 value -4.227408
iter   8 value -4.227573
iter   9 value -4.227742
iter  10 value -4.227786
iter  11 value -4.227789
iter  12 value -4.227791
iter  13 value -4.227810
iter  14 value -4.227835
iter  15 value -4.227875
iter  16 value -4.227896
iter  17 value -4.227904
iter  18 value -4.227905
iter  19 value -4.227906
iter  20 value -4.227907
iter  21 value -4.227907
iter  22 value -4.227907
iter  23 value -4.227908
iter  24 value -4.227908
iter  25 value -4.227913
iter  26 value -4.227924
iter  27 value -4.227926
iter  28 value -4.227934
iter  29 value -4.227936
iter  30 value -4.227938
iter  31 value -4.227940
iter  32 value -4.227942
iter  33 value -4.227944
iter  34 value -4.227945
iter  35 value -4.227946
iter  36 value -4.227947
iter  37 value -4.227947
iter  38 value -4.227947
iter  39 value -4.227947
iter  39 value -4.227947
final  value -4.227947 
converged
<><><><><><><><><><><><><><>
 
Coefficients: 
      Estimate     SE t.value p.value
ar1     0.6402 0.0990  6.4681  0.0000
ar2    -0.0559 0.1290 -0.4333  0.6648
ar3    -0.3082 0.1230 -2.5050  0.0123
ar4     0.6686 0.1138  5.8733  0.0000
ar5     0.0547 0.0177  3.0918  0.0020
ma1     0.3463 0.0983  3.5224  0.0004
ma2     0.4166 0.0732  5.6908  0.0000
ma3     0.7244 0.1066  6.7984  0.0000
xmean   4.4777 0.5389  8.3090  0.0000

sigma^2 estimated as 0.0002121959 on 3582 degrees of freedom 
 
AIC = -5.612448  AICc = -5.612434  BIC = -5.595221 
 

According to the model diagnostics, ARIMA(0,1,0) is the better model.

The standardized residuals plot still shows pronounced volatility clustering, so a conditional-variance (GARCH) model is needed on top of the ARIMA fit.

Code
# returns_jbht
log.jbht <- log(jbht.close)
ggtsdisplay(diff(log.jbht))

The differenced log series looks like white noise, so the original series resembles a random walk, i.e. ARIMA(0,1,0).

Code
######################## Check for different combinations ########


d=0
i=1
temp= data.frame()
ls=matrix(rep(NA,6*100),nrow=100) # up to 6x6x2 = 72 combinations fit


for (p in 1:6)  # AR order p-1 = 0..5
{
  for(q in 1:6)  # MA order q-1 = 0..5
  {
    for(d in 0:1)
    {
      
      if(p-1+d+q-1<=8)
      {
        
        model<- Arima(log.jbht,order=c(p-1,d,q-1),include.drift=TRUE) 
        ls[i,]= c(p-1,d,q-1,model$aic,model$bic,model$aicc)
        i=i+1
        #print(i)
        
      }
      
    }
  }
}

temp= as.data.frame(ls)
names(temp)= c("p","d","q","AIC","BIC","AICc")

temp[which.min(temp$AIC),]
   p d q       AIC       BIC      AICc
41 3 0 2 -19336.13 -19286.65 -19336.09
Code
temp[which.min(temp$BIC),]#0,1,0
  p d q       AIC       BIC      AICc
2 0 1 0 -19310.12 -19297.75 -19310.12
Code
temp[which.min(temp$AICc),]
   p d q       AIC       BIC      AICc
41 3 0 2 -19336.13 -19286.65 -19336.09

AIC and AICc favor ARIMA(3,0,2), while BIC again favors the random walk ARIMA(0,1,0).

Code
# model diagnostics
sarima(log.jbht, 0,1,0)
initial  value -4.108928 
iter   1 value -4.108928
final  value -4.108928 
converged
initial  value -4.108928 
iter   1 value -4.108928
final  value -4.108928 
converged
<><><><><><><><><><><><><><>
 
Coefficients: 
         Estimate    SE t.value p.value
constant    5e-04 3e-04  1.9063  0.0567

sigma^2 estimated as 0.0002697929 on 3589 degrees of freedom 
 
AIC = -5.378865  AICc = -5.378864  BIC = -5.375418 
 

Code
sarima(log.jbht, 3,0,2)
initial  value -0.618808 
iter   2 value -0.677764
iter   3 value -0.828776
iter   4 value -1.767697
iter   5 value -2.620695
iter   6 value -3.714623
iter   7 value -3.793820
iter   8 value -4.033062
iter   9 value -4.079833
iter  10 value -4.097624
iter  11 value -4.102843
iter  12 value -4.109163
iter  13 value -4.109191
iter  14 value -4.109193
iter  15 value -4.109193
iter  16 value -4.109198
iter  17 value -4.109208
iter  18 value -4.109249
iter  19 value -4.109996
iter  20 value -4.110005
iter  21 value -4.110104
iter  22 value -4.110270
iter  23 value -4.110285
iter  24 value -4.110287
iter  25 value -4.110302
iter  26 value -4.110306
iter  27 value -4.110308
iter  28 value -4.110308
iter  29 value -4.110312
iter  30 value -4.110319
iter  31 value -4.110341
iter  32 value -4.110408
iter  33 value -4.110411
iter  34 value -4.110454
iter  35 value -4.110516
iter  36 value -4.110527
iter  37 value -4.110530
iter  38 value -4.110552
iter  39 value -4.110563
iter  40 value -4.110570
iter  41 value -4.110573
iter  42 value -4.110578
iter  43 value -4.110590
iter  44 value -4.110623
iter  45 value -4.110710
iter  46 value -4.110716
iter  47 value -4.110741
iter  48 value -4.110822
iter  49 value -4.110859
iter  50 value -4.110860
iter  51 value -4.110967
iter  52 value -4.110994
iter  53 value -4.111022
iter  54 value -4.111114
iter  55 value -4.111228
iter  56 value -4.111458
iter  57 value -4.111633
iter  58 value -4.111683
iter  59 value -4.111684
iter  60 value -4.111685
iter  61 value -4.111690
iter  62 value -4.111706
iter  63 value -4.111716
iter  64 value -4.111720
iter  65 value -4.111720
iter  65 value -4.111720
iter  65 value -4.111720
final  value -4.111720 
converged
initial  value -4.110351 
iter   2 value -4.110351
iter   3 value -4.110357
iter   4 value -4.110450
iter   5 value -4.110492
iter   6 value -4.110498
iter   7 value -4.110500
iter   8 value -4.110514
iter   9 value -4.110535
iter  10 value -4.110566
iter  11 value -4.110585
iter  12 value -4.110595
iter  13 value -4.110596
iter  14 value -4.110597
iter  14 value -4.110597
final  value -4.110597 
converged
<><><><><><><><><><><><><><>
 
Coefficients: 
      Estimate     SE  t.value p.value
ar1    -0.6804 0.0548 -12.4157       0
ar2     0.8165 0.0432  18.9006       0
ar3     0.8628 0.0764  11.2902       0
ma1     1.6492 0.0609  27.0799       0
ma2     0.8232 0.0851   9.6779       0
xmean   4.4852 0.5295   8.4703       0

sigma^2 estimated as 0.0002683415 on 3585 degrees of freedom 
 
AIC = -5.379419  AICc = -5.379412  BIC = -5.36736 
 

According to the model diagnostics, ARIMA(3,0,2) is the better model.

The standardized residuals plot still shows pronounced volatility clustering, so a conditional-variance (GARCH) model is needed on top of the ARIMA fit.

GARCH Model on ARIMA Residuals

Code
# returns_ups
fit.ups <- Arima(log.ups, order=c(0,1,0))
summary(fit.ups)
Series: log.ups 
ARIMA(0,1,0) 

sigma^2 = 0.0002134:  log likelihood = 10078.34
AIC=-20154.68   AICc=-20154.68   BIC=-20148.5

Training set error measures:
                       ME       RMSE         MAE         MPE      MAPE
Training set 0.0003808867 0.01460477 0.009700401 0.008393482 0.2164048
                  MASE        ACF1
Training set 0.9998257 -0.01479948
Code
res.ups <- fit.ups$res
ggtsdisplay(res.ups^2)

The squared residuals show significant correlations at lag orders p = 1 to 6 and q = 0 to 6. Given this observation, a GARCH model is more appropriate for the variance.
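Engle's ARCH LM test gives a formal counterpart to the ACF/PACF plots of the squared residuals; a sketch, assuming the FinTS package is available:

Code
library(FinTS)
# ARCH LM test on the ARIMA(0,1,0) residuals:
# a small p-value confirms ARCH effects
ArchTest(res.ups, lags = 12)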

Code
library(tseries) # provides garch()

model <- list()
cc <- 1 # model counter
for (p in 1:6) {
  for (q in 1:6) {
    model[[cc]] <- garch(res.ups, order = c(q, p), trace = F)
    cc <- cc + 1
  }
}

## get AIC values for model evaluation
GARCH_AIC <- sapply(model, AIC) ## model with lowest AIC is the best
which(GARCH_AIC == min(GARCH_AIC))
[1] 5
Code
model[[which(GARCH_AIC == min(GARCH_AIC))]]

Call:
garch(x = res.ups, order = c(q, p), trace = F)

Coefficient(s):
       a0         a1         b1         b2         b3         b4         b5  
5.820e-06  1.783e-01  9.359e-07  2.795e-02  1.530e-01  3.433e-01  2.979e-01  
Code
library(fGarch)
summary(garchFit(~garch(2,5), res.ups,trace = F))

Title:
 GARCH Modelling 

Call:
 garchFit(formula = ~garch(2, 5), data = res.ups, trace = F) 

Mean and Variance Equation:
 data ~ garch(2, 5)
<environment: 0x14d5c0c50>
 [data = res.ups]

Conditional Distribution:
 norm 

Coefficient(s):
        mu       omega      alpha1      alpha2       beta1       beta2  
3.4080e-04  2.3452e-06  6.5935e-02  8.0254e-03  1.0000e-08  1.0000e-08  
     beta3       beta4       beta5  
1.5514e-01  5.5495e-01  2.0726e-01  

Std. Errors:
 based on Hessian 

Error Analysis:
        Estimate  Std. Error  t value Pr(>|t|)    
mu     3.408e-04   2.047e-04    1.665   0.0959 .  
omega  2.345e-06         NaN      NaN      NaN    
alpha1 6.593e-02   1.049e-02    6.286 3.27e-10 ***
alpha2 8.025e-03         NaN      NaN      NaN    
beta1  1.000e-08         NaN      NaN      NaN    
beta2  1.000e-08   6.860e-02    0.000   1.0000    
beta3  1.551e-01   1.039e-01    1.494   0.1352    
beta4  5.549e-01   6.468e-02    8.580  < 2e-16 ***
beta5  2.073e-01         NaN      NaN      NaN    
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Log Likelihood:
 10380.61    normalized:  2.89073 

Description:
 Thu Apr 11 16:09:25 2024 by user:  


Standardised Residuals Tests:
                                   Statistic   p-Value
 Jarque-Bera Test   R    Chi^2  1.684132e+04 0.0000000
 Shapiro-Wilk Test  R    W      9.090654e-01 0.0000000
 Ljung-Box Test     R    Q(10)  1.285319e+01 0.2319963
 Ljung-Box Test     R    Q(15)  2.227021e+01 0.1009120
 Ljung-Box Test     R    Q(20)  2.551239e+01 0.1825264
 Ljung-Box Test     R^2  Q(10)  5.918621e+00 0.8220534
 Ljung-Box Test     R^2  Q(15)  6.796500e+00 0.9630525
 Ljung-Box Test     R^2  Q(20)  8.153851e+00 0.9908000
 LM Arch Test       R    TR^2   6.170443e+00 0.9072473

Information Criterion Statistics:
      AIC       BIC       SIC      HQIC 
-5.776447 -5.760942 -5.776459 -5.770921 

alpha2 and beta1, beta2, beta3, and beta5 are not significant, so I next try the smaller GARCH(1,4) and ARCH(1) specifications.

Code
summary(garchFit(~garch(1,4), res.ups,trace = F))

Title:
 GARCH Modelling 

Call:
 garchFit(formula = ~garch(1, 4), data = res.ups, trace = F) 

Mean and Variance Equation:
 data ~ garch(1, 4)
<environment: 0x151e42898>
 [data = res.ups]

Conditional Distribution:
 norm 

Coefficient(s):
        mu       omega      alpha1       beta1       beta2       beta3  
3.3748e-04  1.9458e-06  6.1328e-02  1.6404e-01  1.0000e-08  2.1648e-01  
     beta4  
5.5097e-01  

Std. Errors:
 based on Hessian 

Error Analysis:
        Estimate  Std. Error  t value Pr(>|t|)    
mu     3.375e-04   2.048e-04    1.648 0.099332 .  
omega  1.946e-06   5.721e-07    3.401 0.000671 ***
alpha1 6.133e-02   9.333e-03    6.571 5.00e-11 ***
beta1  1.640e-01   1.243e-01    1.319 0.187093    
beta2  1.000e-08   1.024e-01    0.000 1.000000    
beta3  2.165e-01   1.736e-01    1.247 0.212518    
beta4  5.510e-01   1.161e-01    4.744 2.09e-06 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Log Likelihood:
 10379.95    normalized:  2.890546 

Description:
 Thu Apr 11 16:09:26 2024 by user:  


Standardised Residuals Tests:
                                   Statistic   p-Value
 Jarque-Bera Test   R    Chi^2  1.741128e+04 0.0000000
 Shapiro-Wilk Test  R    W      9.082065e-01 0.0000000
 Ljung-Box Test     R    Q(10)  1.289984e+01 0.2293259
 Ljung-Box Test     R    Q(15)  2.222985e+01 0.1019172
 Ljung-Box Test     R    Q(20)  2.558735e+01 0.1798770
 Ljung-Box Test     R^2  Q(10)  5.098596e+00 0.8844951
 Ljung-Box Test     R^2  Q(15)  6.156418e+00 0.9769988
 Ljung-Box Test     R^2  Q(20)  7.572598e+00 0.9943390
 LM Arch Test       R    TR^2   5.461723e+00 0.9407557

Information Criterion Statistics:
      AIC       BIC       SIC      HQIC 
-5.777193 -5.765134 -5.777200 -5.772894 
Code
summary(garchFit(~garch(1,0), res.ups,trace = F))

Title:
 GARCH Modelling 

Call:
 garchFit(formula = ~garch(1, 0), data = res.ups, trace = F) 

Mean and Variance Equation:
 data ~ garch(1, 0)
<environment: 0x150daba00>
 [data = res.ups]

Conditional Distribution:
 norm 

Coefficient(s):
        mu       omega      alpha1  
0.00040293  0.00015118  0.35544699  

Std. Errors:
 based on Hessian 

Error Analysis:
        Estimate  Std. Error  t value Pr(>|t|)    
mu     4.029e-04   2.154e-04     1.87   0.0615 .  
omega  1.512e-04   4.896e-06    30.88   <2e-16 ***
alpha1 3.554e-01   3.664e-02     9.70   <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Log Likelihood:
 10236.05    normalized:  2.850473 

Description:
 Thu Apr 11 16:09:26 2024 by user:  


Standardised Residuals Tests:
                                   Statistic     p-Value
 Jarque-Bera Test   R    Chi^2  1.163698e+04 0.000000000
 Shapiro-Wilk Test  R    W      9.101881e-01 0.000000000
 Ljung-Box Test     R    Q(10)  2.436457e+01 0.006689329
 Ljung-Box Test     R    Q(15)  3.472692e+01 0.002688895
 Ljung-Box Test     R    Q(20)  3.718505e+01 0.011118171
 Ljung-Box Test     R^2  Q(10)  2.655987e+01 0.003055868
 Ljung-Box Test     R^2  Q(15)  3.528159e+01 0.002241790
 Ljung-Box Test     R^2  Q(20)  4.502224e+01 0.001095834
 LM Arch Test       R    TR^2   2.700501e+01 0.007714383

Information Criterion Statistics:
      AIC       BIC       SIC      HQIC 
-5.699275 -5.694107 -5.699277 -5.697433 

The model above with the smallest AIC is GARCH(1,4). So for the UPS stock price the best model is ARIMA(0,1,0) + GARCH(1,4).
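The two-step fit (ARIMA for the mean, then GARCH on its residuals) can also be estimated jointly; a sketch with the rugarch package (an assumption: rugarch is installed; the joint estimates will differ somewhat from the two-step results above):

Code
library(rugarch)
# joint mean + variance fit on the log returns:
# mean: constant only (ARIMA(0,1,0) on log prices = white-noise returns)
# variance: standard GARCH(1,4)
spec <- ugarchspec(
  variance.model = list(model = "sGARCH", garchOrder = c(1, 4)),
  mean.model     = list(armaOrder = c(0, 0), include.mean = TRUE)
)
ugarchfit(spec, data = na.omit(returns_ups))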

Code
# returns_jbht
fit.jbht <- Arima(log.jbht, order=c(3,0,2))
summary(fit.jbht)
Series: log.jbht 
ARIMA(3,0,2) with non-zero mean 

Coefficients:
          ar1     ar2     ar3     ma1     ma2    mean
      -0.6804  0.8165  0.8628  1.6492  0.8232  4.4852
s.e.   0.0548  0.0432  0.0764  0.0609  0.0851  0.5295

sigma^2 = 0.0002688:  log likelihood = 9665.75
AIC=-19317.49   AICc=-19317.46   BIC=-19274.19

Training set error measures:
                       ME       RMSE        MAE        MPE      MAPE      MASE
Training set 0.0005153677 0.01638113 0.01181423 0.01093111 0.2688153 0.9998603
                     ACF1
Training set -0.004793327
Code
res.jbht <- fit.jbht$res
ggtsdisplay(res.jbht^2)

The squared residuals show significant correlations at lag orders p = 1 to 6 and q = 0 to 6. Given this observation, a GARCH model is more appropriate for the variance.

Code
library(tseries) # provides garch()

model <- list()
cc <- 1 # model counter
for (p in 1:6) {
  for (q in 1:6) {
    model[[cc]] <- garch(res.jbht, order = c(q, p), trace = F)
    cc <- cc + 1
  }
}

## get AIC values for model evaluation
GARCH_AIC <- sapply(model, AIC) ## model with lowest AIC is the best
which(GARCH_AIC == min(GARCH_AIC))
[1] 1
Code
model[[which(GARCH_AIC == min(GARCH_AIC))]]

Call:
garch(x = res.jbht, order = c(q, p), trace = F)

Coefficient(s):
       a0         a1         b1  
5.734e-06  4.526e-02  9.329e-01  
Code
library(fGarch)
summary(garchFit(~garch(2,1), res.jbht,trace = F))

Title:
 GARCH Modelling 

Call:
 garchFit(formula = ~garch(2, 1), data = res.jbht, trace = F) 

Mean and Variance Equation:
 data ~ garch(2, 1)
<environment: 0x150944f00>
 [data = res.jbht]

Conditional Distribution:
 norm 

Coefficient(s):
        mu       omega      alpha1      alpha2       beta1  
4.8985e-04  5.6881e-06  4.4749e-02  1.0000e-08  9.3353e-01  

Std. Errors:
 based on Hessian 

Error Analysis:
        Estimate  Std. Error  t value Pr(>|t|)    
mu     4.899e-04   2.468e-04    1.985  0.04713 *  
omega  5.688e-06   1.438e-06    3.955 7.67e-05 ***
alpha1 4.475e-02   1.605e-02    2.789  0.00529 ** 
alpha2 1.000e-08   1.734e-02    0.000  1.00000    
beta1  9.335e-01   1.102e-02   84.710  < 2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Log Likelihood:
 9863.911    normalized:  2.746842 

Description:
 Thu Apr 11 16:09:27 2024 by user:  


Standardised Residuals Tests:
                                   Statistic   p-Value
 Jarque-Bera Test   R    Chi^2  1596.5163904 0.0000000
 Shapiro-Wilk Test  R    W         0.9726394 0.0000000
 Ljung-Box Test     R    Q(10)     6.5611206 0.7661259
 Ljung-Box Test     R    Q(15)    10.6465256 0.7772255
 Ljung-Box Test     R    Q(20)    11.7818633 0.9233794
 Ljung-Box Test     R^2  Q(10)     4.5976111 0.9163889
 Ljung-Box Test     R^2  Q(15)    11.0854524 0.7465136
 Ljung-Box Test     R^2  Q(20)    17.3926942 0.6273468
 LM Arch Test       R    TR^2     10.6781686 0.5566823

Information Criterion Statistics:
      AIC       BIC       SIC      HQIC 
-5.490900 -5.482286 -5.490904 -5.487830 

Alpha2 is not significant. So I will try GARCH(1,1).

Code
summary(garchFit(~garch(1,1), res.jbht,trace = F))

Title:
 GARCH Modelling 

Call:
 garchFit(formula = ~garch(1, 1), data = res.jbht, trace = F) 

Mean and Variance Equation:
 data ~ garch(1, 1)
<environment: 0x14dd23570>
 [data = res.jbht]

Conditional Distribution:
 norm 

Coefficient(s):
        mu       omega      alpha1       beta1  
4.9052e-04  5.6905e-06  4.4789e-02  9.3347e-01  

Std. Errors:
 based on Hessian 

Error Analysis:
        Estimate  Std. Error  t value Pr(>|t|)    
mu     4.905e-04   2.467e-04    1.988   0.0468 *  
omega  5.690e-06   1.401e-06    4.062 4.87e-05 ***
alpha1 4.479e-02   6.588e-03    6.798 1.06e-11 ***
beta1  9.335e-01   1.036e-02   90.147  < 2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Log Likelihood:
 9864.037    normalized:  2.746878 

Description:
 Thu Apr 11 16:09:27 2024 by user:  


Standardised Residuals Tests:
                                   Statistic   p-Value
 Jarque-Bera Test   R    Chi^2  1597.3918812 0.0000000
 Shapiro-Wilk Test  R    W         0.9726371 0.0000000
 Ljung-Box Test     R    Q(10)     6.5663567 0.7656506
 Ljung-Box Test     R    Q(15)    10.6645352 0.7759922
 Ljung-Box Test     R    Q(20)    11.8036396 0.9226687
 Ljung-Box Test     R^2  Q(10)     4.6228010 0.9149109
 Ljung-Box Test     R^2  Q(15)    11.1049966 0.7451164
 Ljung-Box Test     R^2  Q(20)    17.4146797 0.6259058
 LM Arch Test       R    TR^2     10.7046610 0.5543844

Information Criterion Statistics:
      AIC       BIC       SIC      HQIC 
-5.491527 -5.484636 -5.491530 -5.489071 

The model above with the smallest AIC is GARCH(1,1). So for the JBHT stock price the best model is ARIMA(3,0,2) + GARCH(1,1).

Final Model

Best model: ARIMA(0,1,0) + GARCH(1,4)

Code
# returns_ups
summary(fit.ups)
Series: log.ups 
ARIMA(0,1,0) 

sigma^2 = 0.0002134:  log likelihood = 10078.34
AIC=-20154.68   AICc=-20154.68   BIC=-20148.5

Training set error measures:
                       ME       RMSE         MAE         MPE      MAPE
Training set 0.0003808867 0.01460477 0.009700401 0.008393482 0.2164048
                  MASE        ACF1
Training set 0.9998257 -0.01479948
Code
summary(garchFit(~garch(1,4), res.ups,trace = F))

Title:
 GARCH Modelling 

Call:
 garchFit(formula = ~garch(1, 4), data = res.ups, trace = F) 

Mean and Variance Equation:
 data ~ garch(1, 4)
<environment: 0x154ee1ad8>
 [data = res.ups]

Conditional Distribution:
 norm 

Coefficient(s):
        mu       omega      alpha1       beta1       beta2       beta3  
3.3748e-04  1.9458e-06  6.1328e-02  1.6404e-01  1.0000e-08  2.1648e-01  
     beta4  
5.5097e-01  

Std. Errors:
 based on Hessian 

Error Analysis:
        Estimate  Std. Error  t value Pr(>|t|)    
mu     3.375e-04   2.048e-04    1.648 0.099332 .  
omega  1.946e-06   5.721e-07    3.401 0.000671 ***
alpha1 6.133e-02   9.333e-03    6.571 5.00e-11 ***
beta1  1.640e-01   1.243e-01    1.319 0.187093    
beta2  1.000e-08   1.024e-01    0.000 1.000000    
beta3  2.165e-01   1.736e-01    1.247 0.212518    
beta4  5.510e-01   1.161e-01    4.744 2.09e-06 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Log Likelihood:
 10379.95    normalized:  2.890546 

Description:
 Thu Apr 11 16:09:27 2024 by user:  


Standardised Residuals Tests:
                                   Statistic   p-Value
 Jarque-Bera Test   R    Chi^2  1.741128e+04 0.0000000
 Shapiro-Wilk Test  R    W      9.082065e-01 0.0000000
 Ljung-Box Test     R    Q(10)  1.289984e+01 0.2293259
 Ljung-Box Test     R    Q(15)  2.222985e+01 0.1019172
 Ljung-Box Test     R    Q(20)  2.558735e+01 0.1798770
 Ljung-Box Test     R^2  Q(10)  5.098596e+00 0.8844951
 Ljung-Box Test     R^2  Q(15)  6.156418e+00 0.9769988
 Ljung-Box Test     R^2  Q(20)  7.572598e+00 0.9943390
 LM Arch Test       R    TR^2   5.461723e+00 0.9407557

Information Criterion Statistics:
      AIC       BIC       SIC      HQIC 
-5.777193 -5.765134 -5.777200 -5.772894 
Code
checkresiduals(garch(res.ups, order = c(4,1),trace = F))


    Ljung-Box test

data:  Residuals
Q* = 6.6616, df = 10, p-value = 0.757

Model df: 0.   Total lags used: 10

The model’s residual plots generally look satisfactory, with only a few notable lags in the ACF plot. The AIC values are relatively low, indicating a good fit. However, not all coefficients of the GARCH model are statistically significant, so some structure may not be fully captured. The Ljung-Box p-values are all above 0.05, indicating that no significant autocorrelation remains in the residuals.

Best model: ARIMA(3,0,2) + GARCH(1,1)

Code
# returns_jbht
summary(fit.jbht)
Series: log.jbht 
ARIMA(3,0,2) with non-zero mean 

Coefficients:
          ar1     ar2     ar3     ma1     ma2    mean
      -0.6804  0.8165  0.8628  1.6492  0.8232  4.4852
s.e.   0.0548  0.0432  0.0764  0.0609  0.0851  0.5295

sigma^2 = 0.0002688:  log likelihood = 9665.75
AIC=-19317.49   AICc=-19317.46   BIC=-19274.19

Training set error measures:
                       ME       RMSE        MAE        MPE      MAPE      MASE
Training set 0.0005153677 0.01638113 0.01181423 0.01093111 0.2688153 0.9998603
                     ACF1
Training set -0.004793327
Code
summary(garchFit(~garch(1,1), res.jbht,trace = F))

Title:
 GARCH Modelling 

Call:
 garchFit(formula = ~garch(1, 1), data = res.jbht, trace = F) 

Mean and Variance Equation:
 data ~ garch(1, 1)
<environment: 0x1563aa920>
 [data = res.jbht]

Conditional Distribution:
 norm 

Coefficient(s):
        mu       omega      alpha1       beta1  
4.9052e-04  5.6905e-06  4.4789e-02  9.3347e-01  

Std. Errors:
 based on Hessian 

Error Analysis:
        Estimate  Std. Error  t value Pr(>|t|)    
mu     4.905e-04   2.467e-04    1.988   0.0468 *  
omega  5.690e-06   1.401e-06    4.062 4.87e-05 ***
alpha1 4.479e-02   6.588e-03    6.798 1.06e-11 ***
beta1  9.335e-01   1.036e-02   90.147  < 2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Log Likelihood:
 9864.037    normalized:  2.746878 

Description:
 Thu Apr 11 16:09:27 2024 by user:  


Standardised Residuals Tests:
                                   Statistic   p-Value
 Jarque-Bera Test   R    Chi^2  1597.3918812 0.0000000
 Shapiro-Wilk Test  R    W         0.9726371 0.0000000
 Ljung-Box Test     R    Q(10)     6.5663567 0.7656506
 Ljung-Box Test     R    Q(15)    10.6645352 0.7759922
 Ljung-Box Test     R    Q(20)    11.8036396 0.9226687
 Ljung-Box Test     R^2  Q(10)     4.6228010 0.9149109
 Ljung-Box Test     R^2  Q(15)    11.1049966 0.7451164
 Ljung-Box Test     R^2  Q(20)    17.4146797 0.6259058
 LM Arch Test       R    TR^2     10.7046610 0.5543844

Information Criterion Statistics:
      AIC       BIC       SIC      HQIC 
-5.491527 -5.484636 -5.491530 -5.489071 
Code
checkresiduals(garch(res.jbht, order = c(1,1),trace = F))


    Ljung-Box test

data:  Residuals
Q* = 6.453, df = 10, p-value = 0.7759

Model df: 0.   Total lags used: 10

The model’s residual plots generally look satisfactory, with only a few notable lags in the ACF plot. The AIC values are relatively low, indicating a good fit, and all coefficients of both the ARIMA and GARCH components are statistically significant. The Ljung-Box p-values are all above 0.05, indicating that no significant autocorrelation remains in the residuals.

Model Equations

\(\operatorname{ARIMA}(0,1,0)\)

\[ \left(1-B\right) Y_t=\epsilon_t \] \(\operatorname{GARCH}(1,4)\) \[ \sigma_t^2=\omega+\alpha_1 \varepsilon_{t-1}^2+\beta_1 \sigma_{t-1}^2+\beta_2 \sigma_{t-2}^2+\beta_3 \sigma_{t-3}^2+\beta_4 \sigma_{t-4}^2 \]

\(\operatorname{ARIMA}(3,0,2)\)

\[ \left(1-\phi_1 B-\phi_2 B^2-\phi_3 B^3\right) \left(Y_t-\mu\right)=\left(1+\theta_1 B+\theta_2 B^2\right) \epsilon_t \] \(\operatorname{GARCH}(1,1)\)

\[ \sigma_t^2=\omega+\alpha_1 \varepsilon_{t-1}^2+\beta_1 \sigma_{t-1}^2 \]
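To see how the GARCH(1,1) recursion above generates volatility clustering, a small self-contained simulation can iterate it directly, using parameters close to the fitted JBHT values (omega ≈ 5.7e-6, alpha1 ≈ 0.045, beta1 ≈ 0.933):

Code
set.seed(1)
n      <- 2000
omega  <- 5.7e-6; alpha1 <- 0.045; beta1 <- 0.933
eps    <- numeric(n)
sigma2 <- numeric(n)
sigma2[1] <- omega / (1 - alpha1 - beta1) # unconditional variance
eps[1]    <- sqrt(sigma2[1]) * rnorm(1)
for (t in 2:n) {
  sigma2[t] <- omega + alpha1 * eps[t - 1]^2 + beta1 * sigma2[t - 1]
  eps[t]    <- sqrt(sigma2[t]) * rnorm(1)
}
# the simulated returns show the same clustered-volatility pattern
plot(eps, type = "l", main = "Simulated GARCH(1,1) returns")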
